The iToBoS dataset: skin region images extracted from 3D total body photographs for lesion detection
Saha, Anup, Adeola, Joseph, Ferrera, Nuria, Mothershaw, Adam, Rezze, Gisele, Gaborit, Séraphin, D'Alessandro, Brian, Hudson, James, Szabó, Gyula, Pataki, Balazs, Rajani, Hayat, Nazari, Sana, Hayat, Hassan, Primiero, Clare, Soyer, H. Peter, Malvehy, Josep, Garcia, Rafael
Artificial intelligence has significantly advanced skin cancer diagnosis by enabling rapid and accurate detection of malignant lesions. In this domain, most publicly available image datasets consist of single, isolated skin lesions positioned at the center of the image. While these lesion-centric datasets have been fundamental for developing diagnostic algorithms, they lack the context of the surrounding skin, which is critical for improving lesion detection. The iToBoS dataset was created to address this challenge. It includes 16,954 images of skin regions from 100 participants, captured using 3D total body photography. Each image roughly corresponds to a $7 \times 9$ cm section of skin with all suspicious lesions annotated using bounding boxes. Additionally, the dataset provides metadata such as anatomical location, age group, and sun damage score for each image. This dataset aims to facilitate training and benchmarking of algorithms, with the goal of enabling early detection of skin cancer and deployment of this technology in non-clinical environments.
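The stated geometry — each image covers roughly a $7 \times 9$ cm skin region — means a pixel-space bounding box can be converted to an approximate physical lesion size. A minimal sketch of that conversion; the box layout (x, y, width, height in pixels) and the example resolution are illustrative assumptions, not the dataset's documented schema:

```python
# Convert a lesion bounding box from pixels to approximate physical size,
# using the ~7 x 9 cm field of view stated for each iToBoS image.
# NOTE: the box format (x, y, w, h in pixels) and the image resolution
# below are illustrative assumptions, not the dataset's documented schema.

def box_size_cm(box_px, image_size_px, field_cm=(9.0, 7.0)):
    """Return (width_cm, height_cm) of a pixel-space bounding box.

    box_px: (x, y, w, h) in pixels; x and y are unused here.
    image_size_px: (width_px, height_px) of the full image.
    field_cm: physical extent of the imaged skin region (width, height).
    """
    _, _, w_px, h_px = box_px
    img_w, img_h = image_size_px
    fov_w, fov_h = field_cm
    return (w_px / img_w * fov_w, h_px / img_h * fov_h)

# Example: a 200 x 150 px box in a hypothetical 1800 x 1400 px image
w_cm, h_cm = box_size_cm((500, 400, 200, 150), (1800, 1400))
# -> roughly 1.0 cm wide by 0.75 cm tall
```

Since the abstract says each image only "roughly" corresponds to 7 × 9 cm, such sizes are approximations, useful for filtering or sanity-checking detections rather than precise measurement.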
- Oceania > Australia > Queensland > Brisbane (0.14)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.05)
- Europe > Italy > Friuli Venezia Giulia > Trieste Province > Trieste (0.04)
- (6 more...)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (0.71)
- Materials > Chemicals > Industrial Gases > Liquified Gas (0.68)
- Materials > Chemicals > Commodity Chemicals > Petrochemicals > LNG (0.68)
A General-Purpose Multimodal Foundation Model for Dermatology
Yan, Siyuan, Yu, Zhen, Primiero, Clare, Vico-Alonso, Cristina, Wang, Zhonghua, Yang, Litao, Tschandl, Philipp, Hu, Ming, Tan, Gin, Tang, Vincent, Ng, Aik Beng, Powell, David, Bonnington, Paul, See, Simon, Janda, Monika, Mar, Victoria, Kittler, Harald, Soyer, H. Peter, Ge, Zongyuan
Diagnosing and treating skin diseases require advanced visual skills across multiple domains and the ability to synthesize information from various imaging modalities. Current deep learning models, while effective at specific tasks such as diagnosing skin cancer from dermoscopic images, fall short in addressing the complex, multimodal demands of clinical practice. Here, we introduce PanDerm, a multimodal dermatology foundation model pretrained through self-supervised learning on a dataset of over 2 million real-world images of skin diseases, sourced from 11 clinical institutions across 4 imaging modalities. We evaluated PanDerm on 28 diverse datasets covering a range of clinical tasks, including skin cancer screening, phenotype assessment and risk stratification, diagnosis of neoplastic and inflammatory skin diseases, skin lesion segmentation, change monitoring, and metastasis prediction and prognosis. PanDerm achieved state-of-the-art performance across all evaluated tasks, often outperforming existing models even when using only 5-10% of labeled data. PanDerm's clinical utility was demonstrated through reader studies in real-world clinical settings across multiple imaging modalities. It outperformed clinicians by 10.2% in early-stage melanoma detection accuracy and enhanced clinicians' multiclass skin cancer diagnostic accuracy by 11% in a collaborative human-AI setting. Additionally, PanDerm demonstrated robust performance across diverse demographic factors, including different body locations, age groups, genders, and skin tones. The strong results in benchmark evaluations and real-world clinical scenarios suggest that PanDerm could enhance the management of skin diseases and serve as a model for developing multimodal foundation models in other medical specialties, potentially accelerating the integration of AI support in healthcare.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.88)
Too white, too male: scientist stakes out inclusive future for AI
Sometime around 1am on a warm night in June 2017, Fei-Fei Li was sitting in her pyjamas in a Washington, DC hotel room, practising a speech she would give in a few hours. Before going to bed, Li cut a full paragraph from her notes to be sure she could reach her most important points in the short time allotted. When she woke up, the five-foot three-inch expert in artificial intelligence put on boots and a black and navy knit dress, a departure from her frequent uniform of a T-shirt and jeans. Then she took an Uber to the Rayburn House Office Building, just south of the United States Capitol. Before entering the chambers of the US House Committee on Science, Space, and Technology, she lifted her phone to snap a photo of the oversized wooden doors. Then she stepped inside the cavernous room and walked to the witness table. The hearing that morning, titled "Artificial Intelligence – With Great Power Comes Great Responsibility," included Timothy Persons, chief scientist of the Government Accountability Office, and Greg Brockman, co-founder and chief technology officer of the non-profit organisation OpenAI. But only Li, the sole woman at the table, could lay claim to a groundbreaking accomplishment in the field of AI. As the researcher who built ImageNet, a database that helps computers recognise images, she's one of a tiny group of scientists – a group perhaps small enough to fit around a kitchen table – who are responsible for AI's recent remarkable advances. That June, Li was serving as the chief artificial intelligence scientist at Google Cloud and was on leave from her position as director of the Stanford Artificial Intelligence Lab.
- North America > United States > District of Columbia > Washington (0.24)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > New Jersey > Morris County > Parsippany (0.04)
- (5 more...)
Using Linked Data for Semi-Automatic Guesstimation
Abourbih, Jonathan Alexander (University of Edinburgh) | Bundy, Alan (University of Edinburgh) | McNeill, Fiona (University of Edinburgh)
GORT is a system that combines Linked Data from several Semantic Web data sources to solve guesstimation problems, with user assistance. The system uses customised inference rules over the relationships in the OpenCyc ontology, combined with data from DBpedia, to reason and perform its calculations. The system is extensible with new Linked Data as it becomes available, and is capable of answering a small range of guesstimation questions.
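The pipeline the abstract describes — retrieve facts from Linked Data sources, then combine them with inference rules to produce an order-of-magnitude estimate — can be illustrated with a toy guesstimation. The knowledge base and the multiplicative rule below are illustrative stand-ins, not GORT's actual OpenCyc rules or live DBpedia values:

```python
# Toy guesstimation in the spirit of GORT: combine facts retrieved from
# Linked Data with a simple inference rule. The fact table and rule are
# illustrative stand-ins, not GORT's rule set or real DBpedia values.

# Stand-in for facts fetched from a source like DBpedia:
# (subject, property) -> numeric value
facts = {
    ("Scotland", "population"): 5_400_000,
    ("Scotland", "carsPerCapita"): 0.45,
}

def guesstimate_total(entity, per_capita_property):
    """Rule: total(x) ~ population(x) * perCapita(x)."""
    population = facts[(entity, "population")]
    per_capita = facts[(entity, per_capita_property)]
    return population * per_capita

cars = guesstimate_total("Scotland", "carsPerCapita")
# -> an estimate of roughly 2.4 million cars
```

GORT's contribution is doing this at scale over real ontological relationships, where choosing which facts to join, and resolving mismatched units and entity names across sources, is the hard part this sketch elides.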
- North America > Canada (0.04)
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Morris County > Parsippany (0.04)
- (8 more...)